Parallel processing is a method of processing in which a task is divided into fragments, each handled by a separate processor within the computer. To parallel process a task, therefore, a computer must have more than one processor running simultaneously. Parallel processing differs from multitasking: parallel processing works on various aspects of a single task simultaneously (and so completes the whole task faster), whereas multitasking handles several different tasks, usually in sequential order. Large parallel computers can have thousands of separate processors. Such systems are especially helpful in simulation experiments that have a large number of independent fragments to model.
A single-processor machine has a speed limitation because all the data must travel along a single channel between the processor and memory. In much the same way that adding lanes to a freeway decreases congestion (increasing speed), adding multiple processor paths can speed up processing. However, even parallel configurations have limits, and beyond a certain point increasing speed is not simply a matter of adding processors: the software must be engineered to make efficient use of the parallel architecture. With software that exploits the benefits of parallel processing, many researchers believe, computers in the future will rely heavily on parallel architectures.
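One way to see why adding processors eventually stops helping is Amdahl's law: if only a fraction p of a program can run in parallel, the remaining serial part caps the overall speedup no matter how many processors are added. The sketch below uses an illustrative, assumed value of p = 0.9.

```python
def amdahl_speedup(p, n):
    """Speedup of a program with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9  # assumption: 90% of the work can be parallelized
for n in (1, 2, 8, 1000):
    print(n, round(amdahl_speedup(p, n), 2))
# Even with 1000 processors, the speedup can never exceed 1 / (1 - p) = 10,
# because the serial 10% of the program still runs on a single processor.
```

This is why the paragraph above stresses software engineering: raising the parallel fraction p (by restructuring the program) often buys more speed than adding further processors.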